
Conversation

@ChrisRackauckas-Claude

Summary

This PR improves performance by fixing type instabilities and reducing allocations in hot code paths:

  • extract_coefficients (util.jl): Fixed a type instability where findfirst returns Union{Nothing, Int}. Now uses an explicit, type-stable loop with a ::Int assertion (see the sketch after this list). Pre-computes array lengths outside loops.

  • monomial_compress (wronskian.jl): Replaced Array{Any, 1} with typed Vector{Tuple{P, T}} for type stability. Pre-computes parameter names in a Set for O(1) lookup instead of repeated map operations.

  • massive_eval (wronskian.jl): Uses typed containers (Set{Vector{Int}}, Dict{Vector{Int}, T}) instead of untyped. Pre-allocates working arrays and uses in-place operations with @inbounds. Pre-sizes result array.

  • det_minor_expansion_inner (elimination.jl): Replaced discarded in keys(cache) with haskey(cache, discarded) for better performance. Pre-allocates Sets for discarded rows/cols. Uses sort! on mutable arrays instead of allocating.
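
To make the findfirst point concrete, here is a minimal sketch of the pattern, not the actual extract_coefficients code: the function name var_index, the argument names, and the error on a missing match are all illustrative.

```julia
# Illustrative sketch only (not the actual extract_coefficients code).
# findfirst(isequal(x), vars) infers as Union{Nothing, Int}; an explicit
# loop with an Int sentinel keeps the result concretely typed.
function var_index(vars::Vector{T}, x::T) where {T}
    idx = 0                      # Int sentinel instead of `nothing`
    for i in 1:length(vars)
        if vars[i] == x
            idx = i
            break
        end
    end
    idx == 0 && error("variable not found")
    return idx                   # always an Int, no Nothing branch to track
end
```

Asserting the result instead, as in findfirst(isequal(x), vars)::Int, narrows the type for downstream code in the same way, at the cost of a runtime error when no match exists.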

Benchmarks

Tested on the Lotka-Volterra and SIWR models; all tests pass with results identical to those before the changes. The optimizations particularly benefit larger ODE systems, where these hot paths are called many times.

Test plan

  • Lotka-Volterra model produces correct identifiability results
  • SIWR model produces correct identifiability results
  • extract_coefficients unit test passes
  • monomial_compress returns correct number of terms
  • Local identifiability assessment works correctly

cc @ChrisRackauckas

🤖 Generated with Claude Code

Optimizations made:
- `extract_coefficients` (util.jl): Fixed type instability with `findfirst`
  returning Union{Nothing, Int}. Now uses explicit type-stable loop.
  Also pre-computes array lengths outside loops for better performance.

- `monomial_compress` (wronskian.jl): Replaced `Array{Any, 1}` with typed
  `Vector{Tuple{P, T}}` for type stability. Pre-computes parameter names
  in a Set for O(1) lookup instead of repeated map operations.

- `massive_eval` (wronskian.jl): Uses typed containers (Set{Vector{Int}},
  Dict{Vector{Int}, T}) instead of untyped. Pre-allocates working arrays
  and uses in-place operations with @inbounds. Pre-sizes result array.

- `det_minor_expansion_inner` (elimination.jl): Replaced `discarded in keys(cache)`
  with `haskey(cache, discarded)` for better performance. Pre-allocates Sets for
  discarded rows/cols. Uses `sort!` on mutable arrays instead of allocating.

These changes improve type stability and reduce unnecessary allocations
in hot code paths, particularly benefiting larger ODE systems.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@pogudingleb
Collaborator

@ChrisRackauckas
Is there a way to interact with the bot?
None of the changed parts are performance critical, so I would take only the improvements that make the code cleaner/shorter, not all of them. Can I ask the bot about this? Or should I just do it on my own from here?

Per reviewer feedback, reverted the performance-focused changes that
add complexity without making the code cleaner. Kept only the changes
that genuinely improve code quality:

Kept:
- haskey() instead of `in keys()` (standard Julia idiom)
- sign = -sign (cleaner)
- Typed arrays Vector{Tuple{P,T}} instead of Array{Any,1}
- Pre-computed param_names Set (avoids repeated map)
- Typed Set{Vector{Int}} and Dict in massive_eval
- Removed commented-out dead code

Reverted:
- Complex pre-allocated loops in det_minor_expansion_inner
- Explicit loops replacing map() in extract_coefficients
- Pre-allocated working arrays and @inbounds loops in massive_eval

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@ChrisRackauckas-Claude
Author

@pogudingleb Thanks for the feedback! I've updated the PR to keep only the changes that make the code cleaner/shorter, and reverted the purely performance-focused optimizations.

Changes kept (cleaner code):

  • haskey(cache, discarded) instead of in keys(cache) - standard Julia idiom (see the sketch after this list)
  • sign = -sign instead of sign = -1 * sign - simpler
  • Typed arrays Vector{Tuple{P,T}} instead of Array{Any,1} - better type safety
  • Pre-computed param_names Set - avoids repeated map calls, cleaner
  • Typed Set{Vector{Int}} and Dict in massive_eval - better type safety
  • Removed commented-out dead code
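
For reference, a small sketch of what the kept idioms look like in isolation; it is illustrative only, not the actual elimination.jl or wronskian.jl code, and the names terms, seen, vals, cache, and signed_sum! are made up.

```julia
# Hypothetical illustration of the retained idioms; names are placeholders.

# Typed containers instead of Array{Any, 1} / untyped Set and Dict:
terms = Vector{Tuple{Int, Int}}()        # stands in for Vector{Tuple{P, T}}
seen  = Set{Vector{Int}}()
vals  = Dict{Vector{Int}, Int}()

# haskey plus the sign flip, as in a cofactor-style expansion:
function signed_sum!(cache::Dict{Vector{Int}, Int}, key::Vector{Int}, xs::Vector{Int})
    haskey(cache, key) && return cache[key]  # instead of `key in keys(cache)`
    total, sign = 0, 1
    for x in xs
        total += sign * x
        sign = -sign                         # instead of `sign = -1 * sign`
    end
    cache[key] = total
    return total
end
```

As the reviewer noted, these paths are not performance critical, so the value of the typed containers here is mainly that the declared element types document intent.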

Changes reverted (added complexity for performance):

  • Pre-allocated Sets with explicit loops in det_minor_expansion_inner
  • Explicit loops replacing map() in extract_coefficients
  • Pre-allocated working arrays and @inbounds loops in massive_eval

Let me know if you'd like any further adjustments!

@pogudingleb
Collaborator

Thanks, looks good now.

pogudingleb merged commit be4d293 into SciML:master on Jan 13, 2026
12 of 13 checks passed